Joint prediction of travel mode choice and purpose from travel surveys: A multitask deep learning approach
The prediction and behavioural analysis of travel mode choice and purpose are critical for transport planning and have attracted increasing research interest. Traditionally, the prediction of travel mode choice and trip purpose has been tackled separately, an approach that fails to fully leverage the information shared between travel mode and purpose. This study addresses this gap by proposing a multitask-learning deep neural network framework (MTLDNN) to jointly predict mode choice and purpose. We empirically evaluate and validate this framework using household travel survey data from Greater London, UK. The results show that this framework achieves significantly lower cross-entropy loss than multinomial logit models (MNL) and single-task-learning deep neural network models (STLDNN), while its predictive accuracy is similar to STLDNN and significantly higher than MNL. Moreover, in terms of behavioural analysis, the substitution patterns and choice probabilities of MTLDNN with respect to the input variables largely agree with those of MNL and STLDNN. This work demonstrates that MTLDNN is efficient in utilising the information shared by travel mode choice and purpose, and is capable of producing behaviourally reasonable substitution patterns across travel modes. Future research could develop more advanced MTLDNN frameworks for travel behaviour analysis and generalise MTLDNN to other travel behaviour topics.
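The core idea of multitask learning described here — a shared representation feeding two task-specific heads whose cross-entropy losses are summed — can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation; all dimensions, weights, and the unweighted loss sum are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical sizes: 8 trips, 10 input features, 16 shared hidden units,
# 4 travel modes, 5 trip purposes.
n, d, h, n_modes, n_purposes = 8, 10, 16, 4, 5
X = rng.normal(size=(n, d))

# Shared hidden layer: both tasks learn from the same representation.
W_shared = rng.normal(size=(d, h)) * 0.1
H = np.maximum(X @ W_shared, 0.0)  # ReLU

# Task-specific output heads for mode choice and trip purpose.
W_mode = rng.normal(size=(h, n_modes)) * 0.1
W_purpose = rng.normal(size=(h, n_purposes)) * 0.1
p_mode = softmax(H @ W_mode)
p_purpose = softmax(H @ W_purpose)

# Joint objective: sum of the two cross-entropy losses.
y_mode = rng.integers(0, n_modes, size=n)
y_purpose = rng.integers(0, n_purposes, size=n)
loss = (-np.log(p_mode[np.arange(n), y_mode]).mean()
        - np.log(p_purpose[np.arange(n), y_purpose]).mean())
```

In a trained model the gradient of this joint loss would update the shared layer from both tasks, which is how the shared information between mode and purpose is exploited.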
Transformer-based Annotation Bias-aware Medical Image Segmentation
Manual medical image segmentation is subjective and suffers from annotator-related bias, which can be mimicked or amplified by deep learning methods. Recently, researchers have suggested that such bias is the combination of annotator preference and stochastic error, modeled respectively by convolution blocks located after the decoder and by a pixel-wise independent Gaussian distribution. It is unlikely that convolution blocks can effectively model the varying degrees of preference at full resolution. Additionally, the independent pixel-wise Gaussian distribution disregards pixel correlations, leading to discontinuous boundaries. This paper proposes a Transformer-based Annotation Bias-aware (TAB) medical image segmentation model, which tackles annotator-related bias by modeling annotator preference and stochastic error. TAB employs a Transformer with learnable queries to extract preference-focused features, which enables it to produce segmentations with various preferences simultaneously using a single segmentation head. Moreover, TAB adopts a multivariate normal distribution assumption that models pixel correlations, and learns the annotation distribution to disentangle the stochastic error. We evaluated TAB on an OD/OC segmentation benchmark annotated by six annotators. Our results suggest that TAB outperforms existing medical image segmentation models that take annotator-related bias into account.
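The "learnable queries" mechanism can be pictured as a cross-attention step in which one query per annotator pools the decoder feature map into an annotator-specific feature. The sketch below is a hypothetical numpy illustration of that idea only; the annotator count matches the benchmark (six), but the feature sizes and single-head attention are assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: 6 annotators, 64 spatial positions, 32-dim features.
n_annotators, n_pos, d = 6, 64, 32
features = rng.normal(size=(n_pos, d))        # flattened decoder feature map
queries = rng.normal(size=(n_annotators, d))  # one learnable query per annotator

# Cross-attention: each query attends over all positions and pools the
# feature map into one preference-focused feature vector per annotator.
attn = softmax(queries @ features.T / np.sqrt(d), axis=1)  # (6, 64)
pref_features = attn @ features                            # (6, 32)
```

A single segmentation head applied to each row of `pref_features` would then yield one segmentation per annotator preference, which is the property the abstract highlights.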
A Parallel Structured Divide-and-Conquer Algorithm for Symmetric Tridiagonal Eigenvalue Problems
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

In this article, a parallel structured divide-and-conquer (PSDC) eigensolver is proposed for symmetric tridiagonal matrices, based on ScaLAPACK and a parallel structured matrix multiplication algorithm called PSMMA. Computing the eigenvectors via matrix-matrix multiplications is the most computationally expensive part of the divide-and-conquer algorithm, and one of the matrices involved in such multiplications is a rank-structured Cauchy-like matrix. By exploiting this property, PSMMA constructs the local matrices from the generators of the Cauchy-like matrix without any communication, and further reduces the computation costs by using a structured low-rank approximation algorithm. Thus, both the communication and computation costs are reduced. Experimental results show that both PSMMA and PSDC are highly scalable and scale to at least 4096 processes. PSDC has better scalability than PHDC, which was proposed in [16] and only scaled to 300 processes for the same matrices. Compared with PDSTEDC in ScaLAPACK, PSDC is always faster and achieves 1.4x-1.6x speedup for some matrices with few deflations. PSDC is also comparable with ELPA, being faster than ELPA when using few processes and slightly slower when using many processes.

The authors would like to thank the referees for their valuable comments, which greatly improved the presentation of this article. This work was supported by the National Natural Science Foundation of China (Nos. NNW2019ZT6-B20, NNW2019ZT6B21, NNW2019ZT5-A10, U1611261, 61872392, and U1811461), the National Key R&D Program of China (2018YFB0204303), the NSF of Hunan (No. 2019JJ40339), the NSF of NUDT (No. ZK18-03-01), the Guangdong Natural Science Foundation (2018B030312002), and the Program for Guangdong Introducing Innovative and Entrepreneurial Teams under Grant 2016ZT06D211. The work of Jose E. Roman was supported by the Spanish Agencia Estatal de Investigacion (AEI) under project SLEPc-DA (PID2019-107379RB-I00).

Liao, X.; Li, S.; Lu, Y.; Román Moltó, J. E. (2021). A Parallel Structured Divide-and-Conquer Algorithm for Symmetric Tridiagonal Eigenvalue Problems. IEEE Transactions on Parallel and Distributed Systems, 32(2), 367-378. https://doi.org/10.1109/TPDS.2020.3019471
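The key structural trick — building any local block of a Cauchy-like matrix from its generator vectors alone, so no matrix entries need to be communicated — can be shown in a few lines. This is a generic numpy sketch of a Cauchy-like matrix C[i, j] = u[i]·v[j] / (x[i] − y[j]); the node values and sizes are made up for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def cauchy_like_block(u, v, x, y, rows, cols):
    """Build the local block C[rows, cols] with C[i, j] = u[i]*v[j] / (x[i] - y[j])
    directly from the generator vectors u, v, x, y - no matrix communication."""
    xi = x[rows][:, None]
    yj = y[cols][None, :]
    return (u[rows][:, None] * v[cols][None, :]) / (xi - yj)

n = 8
u, v = rng.normal(size=n), rng.normal(size=n)
x = np.arange(n, dtype=float)  # interlaced node sets guarantee x[i] != y[j]
y = x + 0.5

# A process owning rows 0..3 and columns 4..7 builds its block locally...
local = cauchy_like_block(u, v, x, y, np.arange(0, 4), np.arange(4, 8))
# ...and it matches the corresponding slice of the full matrix exactly.
full = cauchy_like_block(u, v, x, y, np.arange(n), np.arange(n))
assert np.allclose(local, full[0:4, 4:8])
```

Because each process only needs the O(n) generator vectors rather than the O(n²) matrix, the communication volume of the multiplication step drops accordingly, which is the saving the abstract describes.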
Discrepancy Matters: Learning from Inconsistent Decoder Features for Consistent Semi-supervised Medical Image Segmentation
Semi-supervised learning (SSL) has proven beneficial for mitigating the issue of limited labeled data, especially for the task of volumetric medical image segmentation. Unlike previous SSL methods, which focus on exploring highly confident pseudo-labels or developing consistency regularization schemes, our empirical findings suggest that inconsistent decoder features emerge naturally when two decoders strive to generate consistent predictions. Based on this observation, we first analyze the value of discrepancy in learning towards consistency, under both pseudo-labeling and consistency regularization settings, and subsequently propose a novel SSL method called LeFeD, which learns the feature-level discrepancy obtained from two decoders by feeding the discrepancy as a feedback signal to the encoder. The core design of LeFeD is to enlarge the difference by training differentiated decoders, and then learn from the inconsistent information iteratively. We evaluate LeFeD against eight state-of-the-art (SOTA) methods on three public datasets. Experiments show that LeFeD surpasses competitors without any bells and whistles, such as uncertainty estimation or strong constraints, and sets a new state of the art for semi-supervised medical image segmentation. Code is available at https://github.com/maxwell0027/LeFeD.
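The feedback loop described here — two differentiated decoders, a feature-level discrepancy, and that discrepancy re-entering the encoder — can be sketched abstractly. The toy below stands in for the real networks with fixed random linear maps; everything about it (dimensions, tanh decoders, additive feedback) is a hypothetical illustration of the data flow, not LeFeD's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a 16-dim encoder output and two differently initialised
# "decoders" represented by random linear maps.
d = 16
enc_out = rng.normal(size=(1, d))
W_dec1 = rng.normal(size=(d, d)) * 0.1
W_dec2 = rng.normal(size=(d, d)) * 0.1  # differentiated second decoder

feat = enc_out
for step in range(3):
    out1 = np.tanh(feat @ W_dec1)
    out2 = np.tanh(feat @ W_dec2)
    # Inconsistent decoder features: the feature-level discrepancy.
    discrepancy = np.abs(out1 - out2)
    # Feedback: the discrepancy is fed back towards the encoder side,
    # so the next iteration learns from the inconsistent information.
    feat = enc_out + discrepancy
```

In the real method the encoder would be trained on this signal rather than simply re-reading it, but the sketch shows where the discrepancy enters the loop.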
A note on additive complements of the squares
Let $S$ be the set of squares and let $B$ be an additive complement of $S$, i.e. every sufficiently large integer can be written as $s + b$ with $s \in S$ and $b \in B$. In 2017, Chen and Fang \cite{C-F} proved a lower bound for such complements. In this note, we improve Chen and Fang's result. As an application, we make some progress on a problem of Ben Green.
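The definition of an additive complement can be made concrete with a small computation. The greedy construction below is purely illustrative (it is not the construction studied in the note, and the range $[1, 200]$ is arbitrary); it builds a set $B$ so that every integer in the range is a square plus an element of $B$, here counting $0$ as a square so that $n = 0 + n$ is allowed.

```python
# Illustration of the definition: B is an additive complement of the set S of
# squares on [1, N] if every n in that range is of the form s + b, s in S, b in B.
N = 200
squares = [k * k for k in range(0, 15) if k * k <= N]  # includes 0 by convention here

B, covered = [], set()
for n in range(1, N + 1):
    if n not in covered:
        B.append(n)  # greedily add n itself (0 is a square, so n = 0 + n)
        covered.update(n + s for s in squares if n + s <= N)

# Check: every integer in [1, N] is a square plus an element of B.
assert all(any(n - s in B for s in squares if s <= n) for n in range(1, N + 1))
```

Questions like the one studied in the note concern how sparse such a $B$ can be made, i.e. lower bounds on its counting function.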